A brain–computer interface (BCI) is a promising technology that analyzes brain signals and controls a robot or computer according to the user's intention. This paper introduces our studies aimed at overcoming the challenges of using BCIs in daily life. Several methods exist to implement BCIs, such as sensorimotor rhythms (SMR), P300, and steady-state visually evoked potential (SSVEP). These methods have different pros and cons depending on the BCI type, but all of them offer only a limited set of selectable commands. Controlling a robot arm according to the user's intention would enable BCI users to perform a wide range of tasks. We introduce our study on predicting three-dimensional arm movement with a non-invasive method, and we describe how the prediction can be compensated using an external camera to achieve high accuracy. For daily use, BCI users should be able to turn the BCI system on or off because of prediction errors, and they should be able to switch the BCI mode to the most efficient BCI type. The BCI mode can be switched based on the user's state; we explain our study on estimating the user state from the brain's functional connectivity using a convolutional neural network (CNN). Additionally, BCI users should be able to perform various tasks simultaneously, such as carrying an object, walking, or talking. We describe a multi-function BCI study that predicts multiple intentions simultaneously with a single classification model. Finally, we present our view of future directions for BCI research. Although many limitations remain in using BCIs in daily life, we hope that our studies will serve as a foundation for developing a practical BCI system.